Towards Comparative Mining of Web Document Objects with NFA: WebOMiner System

Authors

  • Christie I. Ezeife
  • Titas Mutsuddy
Abstract

The process of extracting comparative heterogeneous web content data, which are derived and historical, from related web pages is still in its infancy. Discovering potentially useful and previously unknown information or knowledge from web contents, such as "list all articles on 'Sequential Pattern Mining' written between 2007 and 2011, including title, authors, volume, abstract, paper, citation, and year of publication," would require finding the schema of web documents from different web pages, performing web content data integration, and building a virtual or physical data warehouse before web content extraction and mining from the database. This paper proposes a technique for automatic web content data extraction, the WebOMiner system, which models web sites of a specific domain, such as Business to Customer (B2C) web sites, as object-oriented database schemas. Then, non-deterministic finite state automata (NFA) based wrappers for recognizing content types from this domain are built and used to extract related contents from data blocks into an integrated database, for future second-level mining and deep knowledge discovery.

DOI: 10.4018/jdwm.2012100101. International Journal of Data Warehousing and Mining, 8(4), 1-21, October-December 2012. Copyright © 2012, IGI Global.

[...] is needed for effective derived and historical querying of web content data. Some researchers adopted a virtual web data extraction approach, without creating a physical database or warehouse (Bornhövd & Buchmann, 1999), but such systems may have difficulty with contents like images. Web pages also contain other information or blocks, such as advertisements, attached pages, and copyright notices; these are also web contents, but are usually not considered part of the primary page information.
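The abstract's idea of modeling a B2C domain as an object-oriented database schema can be illustrated with a minimal sketch. This is not the authors' actual schema; the class names, fields, and example values below are assumptions for demonstration only.

```python
from dataclasses import dataclass, field

@dataclass
class ContentObject:
    """Illustrative common superclass for web content types."""
    source_url: str

@dataclass
class ProductObject(ContentObject):
    """A product data block extracted from a B2C page (hypothetical fields)."""
    title: str
    price: float
    brand: str = ""

@dataclass
class ListObject(ContentObject):
    """A list block, e.g., the set of products found on one page."""
    items: list = field(default_factory=list)

# Heterogeneous contents share one superclass, so an extractor can
# handle them uniformly without relying on page presentation structure.
page = ListObject(source_url="http://example.com/monitors")
page.items.append(ProductObject(source_url=page.source_url,
                                title='17" LCD monitor',
                                price=189.99, brand="Samsung"))
print(len(page.items), isinstance(page.items[0], ContentObject))  # 1 True
```

Because every content type derives from one root class, extracted objects from different pages can be stored and queried together in a single integrated database, which is the point of the schema-first approach.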
This unwanted information in a web page is called noise information, and it usually needs to be cleaned before mining the web contents (Gupta et al., 2005; Ezeife & Ohanekwu, 2005; Li & Ezeife, 2006). Borges and Levene (1999) categorized web mining into three areas: web structure mining, web usage mining, and web content mining. Web usage mining processes usage information, i.e., the history of users' visits to different web pages, which is generally stored in chronological order in web log files, server logs, error logs, and cookie logs (Buchner & Mulvenna, 1998; Ezeife & Lui, 2009; Priya & Vadivel, 2012). When a mechanism is used to extract relevant and important information from web documents, or to discover knowledge or patterns from them, it is called web content mining. Traditional mechanisms include: providing a language to extract certain patterns from web pages, discovering frequent patterns, clustering for document classification, machine learning for wrapper (i.e., data extraction program) induction, and automatic wrapper generation (Liu & Chen-Chung-Chang, 2004; Muslea, Minton, & Knoblock, 1999; Zhao et al., 2005; Crescenzi, Mecca, & Merialdo, 2001; Liu, 2007). All these traditional mechanisms are unable to capture heterogeneous web contents together, as they strictly rely on the web document presentation structure. Existing extractors are also limited in finding comparative historical and derived information from web documents. Creating more robust automatic wrappers for multiple data sources requires incorporating efficient techniques for automatic schema (attribute) matching, some of which are presented in Lewis and Janeja (2011). Methods for testing the quality of extracted and integrated information can also be incorporated in the future (Golfarelli & Rizzi, 2011). Some sample queries that may not be accurately answered by existing systems are:

1. Provide a comparative analysis of products, including sales and comments, on four retail store web sites over the past year.
2. List all 17" LCD Samsung monitors selling around Toronto at a price less than $200.

In Annoni and Ezeife (2009), a model for representing web contents as objects was presented. It encapsulates web contents in object-oriented class hierarchies, using six web content data types, which enables catching heterogeneous contents together in a unified way without strictly relying on web page presentation structure.
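The NFA-based wrapper idea can be sketched as a small recognizer over content-type tokens emitted for a data block. The token names ("title", "image", "price") and the transition table below are illustrative assumptions, not the paper's actual automaton.

```python
# Illustrative non-deterministic finite automaton (NFA) that accepts a
# product data block as a sequence of content-type tokens. Transition
# table maps state -> {token_type -> set of next states}; the set of
# next states is what makes it non-deterministic.
NFA = {
    "start":      {"title": {"got_title"}},
    "got_title":  {"image": {"got_title", "want_price"},  # images may repeat
                   "price": {"accept"}},
    "want_price": {"price": {"accept"}},
    "accept":     {},
}

def accepts(tokens, nfa=NFA, start="start", accept="accept"):
    """Run the NFA over a token sequence, tracking all active states."""
    states = {start}
    for tok in tokens:
        # Follow every possible transition from every active state.
        states = set().union(*(nfa[s].get(tok, set()) for s in states))
        if not states:          # no transition on this token: reject
            return False
    return accept in states

print(accepts(["title", "image", "price"]))           # True
print(accepts(["title", "image", "image", "price"]))  # True
print(accepts(["image", "price"]))                    # False
```

A wrapper built this way recognizes a content type by the shape of its token sequence rather than by fixed positions in the HTML, which is how it can tolerate presentation variations across pages of the same domain.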

Similar resources

Query Architecture Expansion in Web Using Fuzzy Multi Domain Ontology

Due to the growth of the web, there are many challenges in establishing a general framework for mining and retrieving structured data from the Web. Creating an ontology is a step towards solving this problem. An ontology raises the main entities and concepts of any data in data mining. In this paper, we tried to propose a method for applying the "meaning" of the search system, but the problem ...


RRLUFF: Ranking function based on Reinforcement Learning using User Feedback and Web Document Features

The principal aim of a search engine is to provide results sorted according to the user's requirements. To achieve this aim, it employs ranking methods to rank web documents based on their significance and relevance to the user's query. The novelty of this paper is a user feedback-based ranking algorithm using reinforcement learning. The proposed algorithm is called RRLUFF, in which the rank...


Hybrid Adaptive Educational Hypermedia Recommender Accommodating User's Learning Style and Web Page Features

Personalized recommenders have proved to be of use as a solution to reduce the information overload problem. Especially in Adaptive Hypermedia Systems, a recommender is the main module that delivers suitable learning objects to learners. Recommenders suffer from the cold-start and sparsity problems. Furthermore, obtaining learners' preferences is cumbersome. Most studies have only focused...


A survey on Automatic Text Summarization

Text summarization endeavors to produce a summary version of a text while maintaining the original ideas. The textual content on the web, in particular, is growing at an exponential rate. The ability to sift through such a massive amount of data in order to extract useful information is a major undertaking and requires an automatic mechanism to aid with the extant repository of informa...


Web pages ranking algorithm based on reinforcement learning and user feedback

The main challenge of a search engine is ranking web documents to provide the best response to a user's query. Despite the huge number of extracted results for a user's query, only a small number of the first results are examined by users; therefore, the insertion of related results in the first ranks is of great importance. In this paper, a ranking algorithm based on the reinforcement le...



Journal:
  • IJDWM

Volume 8, Issue 4

Pages 1-21

Publication year: 2012